Exploring Robot Teleoperation in Virtual Reality
This thesis presents research on VR-based robot teleoperation, focusing on three themes: remote environment visualisation in virtual reality, the effect of the remote environment's reconstruction scale in virtual reality on the human operator's ability to control the robot, and the human operator's visual attention patterns when teleoperating a robot from virtual reality.
A VR-based robot teleoperation framework was developed. It is compatible with various robotic systems and cameras, allowing teleoperation and supervised control of any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically-demanding VR interface navigation and controls through any Unity-compatible VR headset with controllers or haptic devices.
Point clouds are a common way to visualise remote environments in 3D, but they often suffer from distortions and occlusions, making it difficult to represent objects' textures accurately. If objects are misrepresented in the VR reconstruction, this can lead to poor decision-making during teleoperation. A study was conducted using an end-effector-mounted RGBD camera with OctoMap mapping, which allowed the remote environment to be explored with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, improving the operator's decision-making and addressing the challenges of point cloud visualisation.
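The bandwidth argument above rests on occupancy mapping collapsing many raw points into few voxels. As a minimal sketch of that idea (not the thesis's actual OctoMap pipeline, which also handles free space and octree resolutions; the function name and resolution are illustrative assumptions):

```python
import numpy as np

def voxelise(points, resolution=0.05):
    """Reduce a raw point cloud to the set of occupied voxel centres.

    This mimics, in miniature, what an occupancy map such as OctoMap
    does for remote-environment transmission: many noisy points that
    fall in the same cell collapse to one occupied voxel, so the data
    sent is bounded by the mapped volume, not the number of points.
    """
    points = np.asarray(points, dtype=float)
    indices = np.floor(points / resolution).astype(int)
    occupied = np.unique(indices, axis=0)   # one entry per occupied cell
    return (occupied + 0.5) * resolution    # voxel centres

# 1000 noisy samples of the same small surface patch collapse
# to a handful of occupied voxels.
rng = np.random.default_rng(0)
cloud = rng.normal(loc=[0.5, 0.5, 0.5], scale=0.01, size=(1000, 3))
print(len(voxelise(cloud)))
```

A real OctoMap additionally stores per-voxel occupancy probabilities updated by ray casting, but the compression intuition is the same.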
Two studies were conducted to understand the effect of dynamic virtual world scaling on teleoperation flow. The first study investigated rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, where the variable mapping depended on the virtual world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload.
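The contrast between constant and variable rate mapping can be sketched as follows. This is a hypothetical illustration, not the thesis's implementation: the linear scaling law, the function name, and `max_rate` are all assumptions; the abstract only states that the joystick-to-rate mapping depended on the virtual world scale.

```python
import numpy as np

def joystick_to_rate(joystick, world_scale, max_rate=0.25, variable=True):
    """Map a normalised joystick deflection in [-1, 1] to an
    end-effector velocity command (m/s) in rate mode.

    With variable mapping the commanded rate is scaled by the current
    virtual world scale, so a zoomed-out view yields faster, coarser
    motion and a zoomed-in view yields slower, finer motion. The
    constant mapping ignores the scale entirely.
    """
    joystick = np.clip(np.asarray(joystick, dtype=float), -1.0, 1.0)
    if variable:
        return joystick * max_rate * world_scale
    return joystick * max_rate

# Full deflection at half world scale commands half the maximum rate.
print(joystick_to_rate([1.0, 0.0, -0.5], world_scale=0.5))
```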
The second study examined how operators used the virtual world scale in supervised control, comparing the scales participants chose at the beginning and end of a three-day experiment. The results showed that, as a group, operators used a different virtual world scale as they became more skilled at the task, and that participants' prior video gaming experience also affected the scale they chose.
Similarly, a study of the human operator's visual attention investigated how it changes as operators become more skilled at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, and showed that operators' visual priorities shift as they become better at teleoperating the robot. The study also demonstrated that operators' prior video gaming experience affects both their ability to teleoperate the robot and their visual attention behaviours.
Robotic surface exploration with vision and tactile sensing for cracks detection and characterisation
This paper presents a novel algorithm for crack localisation and detection
based on visual and tactile analysis via fibre optics. A finger-shaped
fibre-optic sensor is employed to acquire the data for the analysis and the
experiments. To detect possible crack locations, a camera scans the
environment while running an object detection algorithm. Once a crack is
detected, a fully connected graph is created from a skeletonised version of
the crack. A minimum spanning tree is then used to calculate the shortest
path for exploring the crack, which in turn drives the motion planner for the
robotic manipulator. The motion planner divides the crack into multiple
nodes, which are explored individually. The manipulator then performs the
exploration and classifies the tactile data to confirm whether there is
indeed a crack at that location or merely a false positive from the vision
algorithm. If a crack is confirmed, its length, width, orientation, and
number of branches are also calculated. This is repeated until all nodes of
the crack have been explored.
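The graph-building step described above (skeleton pixels, fully connected graph, minimum spanning tree) can be sketched in a few lines. This is an illustrative reconstruction under assumptions, not the paper's code: the function name is hypothetical, and the paper additionally derives a shortest exploration path and a manipulator motion plan from the tree, which this sketch omits.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def exploration_edges(skeleton_pixels):
    """Build a fully connected graph over skeletonised crack pixels,
    weighted by Euclidean distance, and return the edges of its
    minimum spanning tree: the short connecting structure along
    which a tactile probe can visit every node of the crack."""
    pts = np.asarray(skeleton_pixels, dtype=float)
    dist = cdist(pts, pts)                 # complete-graph edge weights
    mst = minimum_spanning_tree(dist)      # sparse result with N-1 edges
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))

# A toy V-shaped crack skeleton: 5 nodes are linked by 4 MST edges.
crack = [(0, 0), (1, 1), (2, 2), (3, 1), (4, 0)]
print(exploration_edges(crack))
```

The minimum spanning tree is a natural choice here because it connects every skeleton node with the smallest total edge length, bounding the distance the manipulator must travel.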
To validate the complete algorithm, various experiments were performed: a
comparison of crack exploration via a full scan versus the motion planning
algorithm, the implementation of frequency-based features for crack
classification, and geometry analysis using a combination of vision and
tactile data. The results show that the proposed algorithm can detect cracks
and improve on the vision-only results, correctly classifying cracks and
their geometry at minimal cost thanks to the motion planning algorithm.
Real-Time Predictive Control of UR5 Robotic Arm through Human Upper Limb Motion Tracking
This thesis reports the author's results on developing a real-time predictive
control system for a Universal Robots UR5 robotic arm through human motion
capture, with a visualisation environment built in the Blender Game Engine.
The UR5 is a 6-degree-of-freedom serial manipulator commonly used in academia
and light industry. It is a very safe robot by design, but this comes at the
cost of a rather limited API with very little support for real-time
operation. The motion tracking is performed by a wireless, low-cost inertial
motion capture setup produced in-house. The motion tracker extends the
author's previous work on replacing the forearm IMU in conventional inertial
motion capture suits with a potentiometer, in order to remove anatomical
constraints from the corresponding data fusion algorithms. The external
controller incorporates an iTaSC SDLS IK solver and a Python-wrapped explicit
model predictive controller in C, generated using the Multi-Parametric
Toolbox. The visualisation gives the user feedback on the robot's progress
towards the target; extending the visualisation to virtual reality is planned
as future work.
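The SDLS (selectively damped least squares) solver mentioned above belongs to the damped-least-squares family of inverse kinematics methods. As a minimal sketch of the family's core update (plain DLS with a single damping constant, not the per-singular-value damping that SDLS adds, and not the thesis's iTaSC implementation; the function name and toy Jacobian are assumptions):

```python
import numpy as np

def dls_ik_step(jacobian, pose_error, damping=0.1):
    """One damped-least-squares IK update:
        dq = J^T (J J^T + lambda^2 I)^{-1} e
    The damping term keeps the joint update bounded near singularities,
    where an undamped pseudoinverse would demand huge joint velocities.
    """
    J = np.asarray(jacobian, dtype=float)
    e = np.asarray(pose_error, dtype=float)
    m = J.shape[0]
    JJt = J @ J.T + (damping ** 2) * np.eye(m)
    return J.T @ np.linalg.solve(JJt, e)

# Toy 2-joint planar-arm Jacobian and a small Cartesian pose error.
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])
dq = dls_ik_step(J, [0.02, -0.01])
print(dq)
```

SDLS refines this by damping each singular value of J individually, so well-conditioned directions are tracked accurately while near-singular directions stay bounded.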
Tests have shown that the robot follows the operator's wrist position and
orientation with an average time lag of 0.05 s when the operator moves within
the robot's velocity and acceleration limits. When the operator moves too
fast for the robot to keep up in real time, the robot catches up with the
operator with little or no overshoot. The thesis results are described in a
late-breaking report and demo accepted at the 12th Annual IEEE/ACM
International Conference on Human-Robot Interaction (HRI 2017).